# High-density parameters
## MT Gen1 Gemma 3 12B
A pre-trained language model produced with the mergekit tool, using the DARE TIES method to fuse multiple Gemma3-12B variants.
Tags: Large Language Model · Transformers
Author: zelk12 · Downloads: 118 · Likes: 3

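Most entries in this list are DARE TIES merges, so a brief sketch of what that method actually does may help: DARE takes each fine-tuned model's delta from the base weights, randomly drops a fraction p of its entries, and rescales the survivors by 1/(1-p); TIES then elects a per-parameter sign by majority mass and averages only the deltas that agree with it. The snippet below is an illustrative NumPy sketch of that procedure under toy tensors, not mergekit's actual implementation; all names in it are invented for the example.

```python
import numpy as np

def dare(delta: np.ndarray, p: float, rng: np.random.Generator) -> np.ndarray:
    """DARE: drop each delta entry with probability p, rescale the rest by 1/(1-p)."""
    mask = rng.random(delta.shape) >= p          # keep with probability (1 - p)
    return delta * mask / (1.0 - p)

def dare_ties_merge(base: np.ndarray, finetuned: list[np.ndarray],
                    p: float = 0.5, seed: int = 0) -> np.ndarray:
    """Illustrative DARE TIES merge of several fine-tuned tensors onto one base tensor."""
    rng = np.random.default_rng(seed)
    # 1. Task vectors (deltas from base), sparsified and rescaled by DARE.
    deltas = [dare(ft - base, p, rng) for ft in finetuned]
    stacked = np.stack(deltas)                   # shape: (n_models, *tensor_shape)
    # 2. TIES sign election: the sign with the larger total mass wins per entry.
    elected = np.sign(stacked.sum(axis=0))
    # 3. Disjoint mean: average only the deltas that agree with the elected sign.
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0)
    counts = np.maximum(agree.sum(axis=0), 1)    # avoid division by zero
    return base + merged_delta / counts

# Toy usage: random tensors stand in for real model weight matrices.
rng = np.random.default_rng(42)
base = rng.normal(size=(4, 4))
variants = [base + rng.normal(scale=0.1, size=(4, 4)) for _ in range(3)]
merged = dare_ties_merge(base, variants)
```
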
## Magtie V1 12B
License: Apache-2.0
MagTie-v1-12B is a 12B-parameter language model merged with the DARE TIES algorithm, combining the strengths of multiple pre-trained models.
Tags: Large Language Model · Transformers
Author: grimjim · Downloads: 32 · Likes: 2

## MT Gemma 3 12B
A merge of soob3123/amoral-gemma3-12B-v2 and IlyaGusev/saiga_gemma3_12b, produced with mergekit's DARE TIES method and aimed at stronger general-purpose language processing.
Tags: Large Language Model · Transformers
Author: zelk12 · Downloads: 1,348 · Likes: 2

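Since this entry names both of its parents, it makes a concrete example of how such merges are typically driven: mergekit reads a YAML recipe and is invoked via its `mergekit-yaml` command. The recipe below is a plausible sketch only, not the author's published configuration; the choice of `google/gemma-3-12b-it` as base model and the density/weight values are assumptions.

```python
import pathlib
import subprocess

# Hypothetical DARE TIES recipe for the two parent models named above.
# base_model and the density/weight values are assumed, not the author's recipe.
CONFIG = """\
merge_method: dare_ties
base_model: google/gemma-3-12b-it
models:
  - model: soob3123/amoral-gemma3-12B-v2
    parameters:
      density: 0.5
      weight: 0.5
  - model: IlyaGusev/saiga_gemma3_12b
    parameters:
      density: 0.5
      weight: 0.5
dtype: bfloat16
"""

pathlib.Path("merge-config.yml").write_text(CONFIG)
# mergekit's standard CLI entry point (requires mergekit installed):
subprocess.run(["mergekit-yaml", "merge-config.yml", "./merged-model"], check=True)
```
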
## Elvenmaid 12B V2
ElvenMaid-12B-v2 is a 12B-parameter language model using the ChatML prompt format, created by merging multiple pre-trained models with mergekit's TIES method; it supports interaction in English and Japanese.
Tags: Large Language Model · Transformers · Supports Multiple Languages
Author: yamatazen · Downloads: 50 · Likes: 4

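ChatML is the only prompt format called out in this list, so a quick illustration: each turn is wrapped in `<|im_start|>` / `<|im_end|>` markers with a role header, and generation continues from an opened assistant turn. The helper below is a minimal hand-rolled formatter for illustration; in practice you would use the model tokenizer's own chat template (e.g., `apply_chat_template` in Transformers).

```python
def to_chatml(messages: list[dict[str, str]]) -> str:
    """Render {role, content} turns in ChatML and open the assistant turn."""
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    out.append("<|im_start|>assistant\n")   # the model generates from here
    return "\n".join(out)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "こんにちは!"},   # the model also supports Japanese
])
print(prompt)
```
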
## MT2 Gen11 Gemma 2 9B
A 9B-parameter language model based on the Gemma-2-9B series, fused with the DARE TIES method from multiple optimized versions of the Gemma model.
Tags: Large Language Model · Transformers
Author: zelk12 · Downloads: 41 · Likes: 3

## MT Gen10 Gemma 2 9B
A multi-model fusion based on the Gemma-2-9B series, merged with the DARE TIES method to integrate the strengths of several Gemma variants.
Tags: Large Language Model · Transformers
Author: zelk12 · Downloads: 26 · Likes: 2

## Reasoning SCE Coder V1.0
A 32B-parameter large language model built with the SCE fusion method, integrating multiple high-performance pre-trained models.
Tags: Large Language Model · Transformers
Author: BenevolenceMessiah · Downloads: 235 · Likes: 3

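SCE (Select, Calculate, Erase) is a more recent training-free merge method that selects high-variance parameter positions across the source models before fusing them; mergekit exposes it as `merge_method: sce` with a `select_topk` fraction. The recipe below is a generic sketch, not the recipe behind this entry: the model identifiers are placeholders and the `select_topk` value is an assumed example setting.

```python
import pathlib

# Generic SCE recipe sketch. The model identifiers are placeholders and
# select_topk: 0.1 is an assumed value, not this entry's actual recipe.
CONFIG = """\
merge_method: sce
base_model: placeholder/base-32b
models:
  - model: placeholder/reasoning-32b
  - model: placeholder/coder-32b
parameters:
  select_topk: 0.1
dtype: bfloat16
"""

pathlib.Path("sce-config.yml").write_text(CONFIG)
# Run with mergekit, as in the earlier recipe:
#   mergekit-yaml sce-config.yml ./merged-model
```
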
## MS3 RP Broth 24B
License: Apache-2.0
An intermediate model from the Tantum merging process, created by merging multiple 24B-parameter Mistral and Llama3 variants; suited to role-playing and text-generation tasks.
Tags: Large Language Model · Transformers · English
Author: d-rang-d · Downloads: 337 · Likes: 6